Distributed and Decentralised Training: Technical Governance Challenges in a Shifting AI Landscape

Kryś, Jakub, Sharma, Yashvardhan, Egan, Janet

arXiv.org Artificial Intelligence

Advances in low-communication training algorithms are enabling a shift from centralised model training to compute setups that are either distributed across multiple clusters or decentralised via community-driven contributions. This paper distinguishes these two scenarios - distributed and decentralised training - which are little understood and often conflated in policy discourse. We discuss how they could impact technical AI governance through an increased risk of compute structuring, capability proliferation, and the erosion of detectability and shutdownability. While these trends foreshadow a possible new paradigm that could challenge key assumptions of compute governance, we emphasise that certain policy levers, like export controls, remain relevant. We also acknowledge potential benefits of decentralised AI, including privacy-preserving training runs that could unlock access to more data, and mitigating harmful power concentration. Our goal is to support more precise policymaking around compute, capability proliferation, and decentralised AI development.


Governance Challenges in Reinforcement Learning from Human Feedback: Evaluator Rationality and Reinforcement Stability

Alsagheer, Dana, Kamal, Abdulrahman, Kamal, Mohammad, Shi, Weidong

arXiv.org Artificial Intelligence

Reinforcement Learning from Human Feedback (RLHF) is central to aligning large language models (LLMs) with human values and expectations. However, the process remains susceptible to governance challenges, including evaluator bias, inconsistency, and the unreliability of feedback. This study examines how the cognitive capacity of evaluators, specifically their level of rationality, affects the stability of reinforcement signals. A controlled experiment comparing high-rationality and low-rationality participants reveals that evaluators with higher rationality scores produce significantly more consistent and expert-aligned feedback. In contrast, lower-rationality participants demonstrate considerable variability in their reinforcement decisions ($p < 0.01$). To address these challenges and improve RLHF governance, we recommend implementing evaluator pre-screening, systematic auditing of feedback consistency, and reliability-weighted reinforcement aggregation. These measures enhance the fairness, transparency, and robustness of AI alignment pipelines.
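The reliability-weighted aggregation the abstract recommends could, in its simplest form, look like the following sketch. The function name, rating scale, and weighting scheme are illustrative assumptions, not the paper's actual implementation; the idea is only that each evaluator's signal is weighted by a reliability score (e.g., from a rationality pre-screen) before aggregation:

```python
def aggregate_feedback(ratings, reliabilities):
    """Weighted average of evaluator ratings, weighted by reliability.

    ratings: per-evaluator scores for one model response.
    reliabilities: nonnegative reliability weights, one per evaluator.
    """
    total = sum(reliabilities)
    if total == 0:
        raise ValueError("at least one evaluator must have nonzero reliability")
    return sum(r * w for r, w in zip(ratings, reliabilities)) / total

# Three evaluators rate a response on a 1-5 scale; the high-reliability
# evaluator (weight 0.9) dominates the aggregated signal.
score = aggregate_feedback([4, 2, 5], [0.9, 0.2, 0.3])
```

In this toy setup, a low-rationality evaluator's noisy rating is down-weighted rather than discarded, which is one plausible reading of "reliability-weighted reinforcement aggregation".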


Regulatory Markets: The Future of AI Governance

Hadfield, Gillian K., Clark, Jack

arXiv.org Artificial Intelligence

Appropriately regulating artificial intelligence is an increasingly urgent policy challenge. Legislatures and regulators lack the specialized knowledge required to best translate public demands into legal requirements. Overreliance on industry self-regulation fails to hold producers and users of AI systems accountable to democratic demands. We propose regulatory markets, in which governments require the targets of regulation to purchase regulatory services from a private regulator. This approach to AI regulation could overcome the limitations of both command-and-control regulation and self-regulation. Regulatory markets could enable governments to establish policy priorities for the regulation of AI, whilst relying on market forces and industry R&D efforts to pioneer the methods of regulation that best achieve policymakers' stated objectives.


Intelligent automation in financial services: Leading the way

#artificialintelligence

The financial services sector has been an eager adopter of robotic process automation (RPA): by one estimate, it accounts for 29% of the RPA market, more than any other sector. So it stands to reason that the industry is an early adopter of intelligent automation, the combination of RPA with AI. "Financial services [institutions] have always been among the top adopters of intelligent automation," says Sarah Burnett, industry analyst and evangelist at process mining vendor KYP.ai. Financial institutions have adopted a range of use cases for intelligent automation, from simple integrations of cognitive services into RPA systems to, in a few cases, AI-powered decision making. As such, they have also encountered the security risks and governance challenges that arise from intelligent automation sooner than most. Intelligent automation is a broad term, representing a range of possibilities for integrating AI and machine learning into process automation.


Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers

Zhang, Baobao | Anderljung, Markus (Centre for the Governance of AI, Future of Humanity Institute, University of Oxford) | Kahn, Lauren (Perry World House, University of Pennsylvania) | Dreksler, Noemi (Centre for the Governance of AI, Future of Humanity Institute, University of Oxford) | Horowitz, Michael C. (Perry World House, University of Pennsylvania) | Dafoe, Allan (Centre for the Governance of AI, Future of Humanity Institute, University of Oxford)

Journal of Artificial Intelligence Research

Machine learning (ML) and artificial intelligence (AI) researchers play an important role in the ethics and governance of AI, including through their work, advocacy, and choice of employment. Nevertheless, this influential group's attitudes are not well understood, undermining our ability to discern consensuses or disagreements between AI/ML researchers. To examine these researchers' views, we conducted a survey of those who published in two top AI/ML conferences (N = 524). We compare these results with those from a 2016 survey of AI/ML researchers (Grace et al., 2018) and a 2018 survey of the US public (Zhang & Dafoe, 2020). We find that AI/ML researchers place high levels of trust in international organizations and scientific organizations to shape the development and use of AI in the public interest; moderate trust in most Western tech companies; and low trust in national militaries, Chinese tech companies, and Facebook. While the respondents were overwhelmingly opposed to AI/ML researchers working on lethal autonomous weapons, they are less opposed to researchers working on other military applications of AI, particularly logistics algorithms. A strong majority of respondents think that AI safety research should be prioritized and that ML institutions should conduct pre-publication review to assess potential harms. Being closer to the technology itself, AI/ML researchers are well placed to highlight new risks and develop technical solutions, so this novel attempt to measure their attitudes has broad relevance. The findings should help to improve how researchers, private sector executives, and policymakers think about regulations, governance frameworks, guiding principles, and national and international governance strategies for AI. This article appears in the special track on AI & Society.


Algorithmia's latest tools aim to solve ML governance challenges

#artificialintelligence

Algorithmia is today debuting new reporting tools which aim to solve machine learning (ML) governance challenges. Research conducted by the company found that the number one challenge organisations are facing with their deployments is governance. "We're still in the early days of ML governance, and organisations lack a clear roadmap or prescriptive advice for implementing it effectively in their own unique environments. Regulations are undefined and a changing and ambiguous regulatory landscape leads to uncertainty and the need for companies to invest significant resources to maintain compliance. Those that can't keep up risk losing their competitive edge. Furthermore, existing solutions are manual and incomplete. Even organisations that are implementing governance today are doing so with a patchwork of disparate tools and manual processes. Not only do such solutions require constant maintenance, but they also risk critical gaps in coverage."


Council Post: Are Your Model Governance Practices 'AI Ready'?

#artificialintelligence

For some industries, the use of AI and machine learning models is novel, but several industries--consumer finance and insurance in particular--have been building, using and governing models for decades. These industries have well-developed governance practices built largely around algorithmic, rule-based and other model technologies and regulations that predate AI models. Many of the enterprises I talk to are revisiting their model operationalization and governance processes and strengthening them with new capabilities to accommodate the increased use of AI/ML technologies. You can't govern what you can't see, so every model risk management (MRM) program must start with a centralized model inventory that includes all the metadata associated with every model throughout its life cycle, from development to deployment, modification and retirement. This model metadata, which documents the model's complete history and lineage, captures a broad range of elements: the specific software and libraries used in the model's development, the data used to train it, the people involved in its development and maintenance and what they created or changed, its intended business use and KPIs, and an explanation of the key influencing factors behind its decision-making.


FLI Podcast- Artificial Intelligence: American Attitudes and Trends with Baobao Zhang - Future of Life Institute

#artificialintelligence

Artificial intelligence is already inextricably woven into everyday life, and its impact will only grow in the coming years. But while this development inspires much discussion among members of the scientific community, public opinion on artificial intelligence has remained relatively unknown. Artificial Intelligence: American Attitudes and Trends, a report published earlier in January by the Center for the Governance of AI, explores this question. Its authors relied on an in-depth survey to analyze American attitudes towards artificial intelligence, from privacy concerns to beliefs about U.S. technological superiority. Some of their findings--most Americans, for example, don't trust Facebook--were unsurprising. But much of their data reflects trends within the American public that have previously gone unnoticed. This month Ariel was joined by Baobao Zhang, lead author of the report, to talk about these findings. Zhang is a PhD candidate in Yale University's political science department and research affiliate with the Center for the Governance of AI at the University of Oxford. Her work focuses on American politics, international relations, and experimental methods. In this episode, Zhang spoke about her take on some of the report's most interesting findings, the new questions it raised, and future research directions for her team.


Artificial Intelligence: American Attitudes and Trends

#artificialintelligence

Americans consider many AI governance challenges to be important; they prioritize data privacy and preventing AI-enhanced cyber attacks, surveillance, and digital manipulation. We sought to understand how Americans prioritize policy issues associated with AI. Respondents were asked to consider five AI governance challenges, randomly selected from a set of 13 (see Appendix B for the text); the order in which these five were presented to each respondent was also randomized. After considering each governance challenge, respondents were asked how likely they think the challenge will affect large numbers of people 1) in the U.S. and 2) around the world within 10 years. We use scatterplots to visualize our survey results. In Figure 3.1, the x-axis is the perceived likelihood of the problem happening to large numbers of people in the U.S. In Figure 3.2, the x-axis is the perceived likelihood of the problem happening to large numbers of people around the world.


Artificial Intelligence: American Attitudes and Trends

#artificialintelligence

Advances in artificial intelligence (AI)1 could impact nearly all aspects of society: the labor market, transportation, healthcare, education, and national security. AI's effects may be profoundly positive, but the technology entails risks and disruptions that warrant attention. While technologists and policymakers have begun to discuss AI and applications of machine learning more frequently, public opinion has not shaped much of these conversations. In the U.S., public sentiments have shaped many policy debates, including those about immigration, free trade, international conflicts, and climate change mitigation. As in these other policy domains, we expect the public to become more influential over time.